Search Results: "alpha"

13 August 2021

Leandro Doctors: Clojure CLI Tools in Debian - GSoC 2021 Partial Evaluation Report

NOTE: this blog post is based on my "Clojure CLI Tools in Debian" GSoC 2021 project Partial Evaluation Report. Hi, everybody! For those who don't know me, my name is Leandro Doctors (allentiak on IRC), and I'm the Debian Clojure Team's GSoC 2021 intern. My mentor is Louis-Philippe Véronneau[*] (pollo on IRC). My co-mentor is Utkarsh Gupta (utkarsh2102 on IRC). My 'no-mentor' :) is Elana Hashman (ehashman on IRC). In this message, I will summarize what I've been up to during the Coding I period (June 7th - July 16th).
TL;DR: my updated data-xml-clojure package was accepted[1] in experimental on Wednesday night :-)
[1] https://tracker.debian.org/news/1244465/accepted-data-xml-clojure-020alpha6-1-source-into-experimental/
Now, let's tell the full story. During the first days of the "Coding I" phase, I did some research on the clj dependencies. And after the precious feedback from Louis-Philippe, Elana, Utkarsh, and Alex Miller (the upstream developer of clj, among many other libraries), I came up with the following packaging strategy.
  1. update org.clojure:data.xml (data-xml-clojure) to 0.2.0-beta6.
  2. package org.clojure:tools.gitlibs (tools-gitlibs-clojure).
  3. consider updating libjsch-agent-proxy-java, jgit, libtools-cli-clojure (all of them already in Debian).
  4. consider packaging com.cognitect.aws:* packages.
According to my original proposal, I should have completed all four tasks during Coding I. Looking back, the main lesson from these past weeks is a known classic: my timeline was too optimistic. I definitely underestimated the difficulty of the packaging process. Out of the four tasks, I only finished the first one.
There were many challenges I had to overcome in order to update the library from version 0.0.8 to 0.2.0-alpha6. This is what I did: first, I patched the source code; then, I completely overhauled the packaging code (that is, what goes inside debian/). All this improved the quality of the package. I also improved the Clojure Packaging Tutorial[2] to make the process easier to follow.
[2] https://wiki.debian.org/Clojure/PackagingTutorial
Looking back, it is almost as if I had started packaging the library from scratch... But, more than what I produced, I think the most important part of all this is what I learned during these weeks. As it was the first time I ever packaged anything in Debian, I had to learn the basics, bump by bump. (And, oh my, I surely did bump quite a few times!)
[3] https://wiki.debian.org/sbuild
[4] https://manpages.debian.org/unstable/git-buildpackage/gbp.1.en.html
[5] https://wiki.debian.org/UsingQuilt
Definitely, all this was worth it. After all, my updated data-xml-clojure package was accepted[1] in experimental on Wednesday :-)
[1] https://tracker.debian.org/news/1244465/accepted-data-xml-clojure-020alpha6-1-source-into-experimental/
Now that I've learned the basics of packaging and bumped enough to get my package accepted, I'm hopeful I can ramp up and catch up with (at least most of) my original schedule during Coding II. tools-gitlibs-clojure, you're next :-)
Thank you very much Louis-Philippe, Elana, and Utkarsh (plus a special mention to Alex) for your precious support during the last few weeks!

15 July 2021

Jonathan Dowland: Small tweaks to git branch behaviour

Despite my best efforts, I often end up with a lot of branches in my git repositories, many of which need cleaning up, but even so, many which don't. Two git configuration tweaks make the output of git branch much more useful for me. Motivational example, default git behaviour:
 git branch
  2021-apr-cpu-proposed
  OPENJDK-159-openj9-FROM
  OPENJDK-312-passwd
  OPENJDK-407-dnf-modules-fonts
  create_override_files_in_redhat_189
* develop
  inline-container-yaml
  local-modules
  mdrafiur-pr185-jolokia
  openjdk-containers-1.9
  openjdk-rm-jolokia
  osbs-openjdk
  release
  signing-intent-release
  ubi-1.3-mergedown
  ubi-11-singleton-jdk
  ubi8.2
  update-FROM-lines
  update-for-cct-module-changes-maven-etc
The default sort order is alphabetical, but that's never useful for the repositories I work in. The age of the branch is generally more useful. This particular example isn't that long, but often the number of branches can fill the screen. git can be configured to use columns for branch listings, which I think generally improves readability.
 git config --global branch.sort authordate
 git config --global column.branch auto
After:
 git branch
  update-for-cct-module-changes-maven-etc   signing-intent-release
  openjdk-rm-jolokia                        local-modules
  ubi8.2                                    mdrafiur-pr185-jolokia
  ubi-11-singleton-jdk                      OPENJDK-312-passwd
  ubi-1.3-mergedown                         create_override_files_in_redhat_189
  OPENJDK-159-openj9-FROM                   2021-apr-cpu-proposed
  openjdk-containers-1.9                    OPENJDK-407-dnf-modules-fonts
  inline-container-yaml                     release
  update-FROM-lines                       * develop
  osbs-openjdk

12 July 2021

Daniel Silverstone: Subplot - First public alpha release

This weekend we (Lars and I) finished our first public alpha release of Subplot. Subplot is a tool for helping you to document your acceptance criteria for a project in such a way that you can also produce a programmatic test suite for the verification criteria. We centre this around the concept of writing a Markdown document about your project, with the option to write Gherkin-like given/when/then scenarios inside which detail the automated verification of the acceptance criteria. This may sound very similar to Yarn, a concept which Lars, Richard, and I came up with in 2013. Critically, back then we were very 'software engineer' focussed, and so Yarn was a testing tool which happened to also produce reasonable documentation outputs if you squinted sideways and tried not to think too critically about them. Subplot, on the other hand, considers the documentation output to be just as important, if not more important, than the test suite output. Yarn was a tool which ran tests embedded in Markdown files, whereas Subplot is a documentation tool capable of extracting tests from an acceptance document for use in testing your project. The release we made is the first time we're actively asking other people to try Subplot and see whether the concept is useful to them. Obviously we expect there to be plenty of sharp corners and there's a good amount of functionality yet to implement to make Subplot as useful as we want it to be, but if you find yourself looking at a project and thinking "How do I make sure this is acceptable to the stakeholders without first teaching them how to read my unit tests?" then Subplot may be the tool for you. While Subplot can be used to produce test suites with functions written in Bash, Python, or Rust, the only language we're supporting as first-class in this release is Python. However, I am personally most interested in the Rust opportunity, as I see a lot of Rust programs very badly tested from the perspective of 'acceptance', as there is a tendency in Rust projects to focus on unit-type tests. If you are writing something in Rust and want to look at producing some high-level acceptance criteria and yet still test in Rust, then please take a look at Subplot, particularly how we test subplotlib itself. Issues, feature requests, and perhaps most relevantly, code patches, gratefully received. A desire to be actively involved in shaping the second goal of Subplot even more so.

10 June 2021

Vincent Bernat: Serving WebP & AVIF images with Nginx

WebP and AVIF are two image formats for the web. They aim to produce smaller files than JPEG and PNG. They both support lossy and lossless compression, as well as alpha transparency. WebP was developed by Google and is a derivative of the VP8 video format.1 It is supported on most browsers. AVIF uses the newer AV1 video format to achieve better results. It is supported by Chromium-based browsers, with experimental support in Firefox.2

Converting and optimizing images For this blog, I am using the following shell snippets to convert and optimize JPEG and PNG images. Skip to the next section if you are only interested in the Nginx setup.

JPEG images JPEG images are converted to WebP using cwebp.
find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -q 84 -af '{}' -o '{}'.webp
They are converted to AVIF using avifenc from libavif:
find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      avifenc --codec aom --yuv 420 --min 20 --max 25 '{}' '{}'.avif
Then, they are optimized using jpegoptim built with Mozilla's improved JPEG encoder, via Nix. This is one reason I love Nix.
jpegoptim=$(nix-build --no-out-link \
      -E 'with (import <nixpkgs> {}); jpegoptim.override { libjpeg = mozjpeg; }')
find media/images -type f -name '*.jpg' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      ${jpegoptim}/bin/jpegoptim --max=84 --all-progressive --strip-all

PNG images PNG images are down-sampled to 8-bit RGBA-palette using pngquant. The conversion reduces file sizes significantly while being mostly invisible.
find media/images -type f -name '*.png' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      pngquant --skip-if-larger --strip \
               --quiet --ext .png --force
Then, they are converted to WebP with cwebp in lossless mode:
find media/images -type f -name '*.png' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -z 8 '{}' -o '{}'.webp
No conversion is done to AVIF: lossless compression is not as efficient as pngquant and lossy compression is only marginally better than what I get with WebP.

Keeping only the smallest files I am only keeping WebP and AVIF images if they are at least 10% smaller than the original format: decoding is usually faster for JPEG and PNG; and JPEG images can be decoded progressively.3
for f in media/images/**/*.{webp,avif}; do
  orig=$(stat --format %s ${f%.*})
  new=$(stat --format %s $f)
  (( orig*0.90 > new )) || rm $f
done
I only keep AVIF images if they are smaller than WebP.
for f in media/images/**/*.avif; do
  [[ -f ${f%.*}.webp ]] || continue
  orig=$(stat --format %s ${f%.*}.webp)
  new=$(stat --format %s $f)
  (( $orig > $new )) || rm $f
done
We can compare how many images are kept when converted to WebP or AVIF:
printf "     %10s %10s %10s\n" Original WebP AVIF
for format in png jpg; do
  printf " $ format:u  %10s %10s %10s\n" \
    $(find media/images -name "*.$format"   wc -l) \
    $(find media/images -name "*.$format.webp"   wc -l) \
    $(find media/images -name "*.$format.avif"   wc -l)
done
AVIF is better than MozJPEG for most JPEG files while WebP beats MozJPEG only for one file out of two:
       Original       WebP       AVIF
 PNG         64         47          0
 JPG         83         40         74

Further reading I didn't detail my choices for quality parameters and there is not much science in it. Here are two resources providing more insight on AVIF:

Serving WebP & AVIF with Nginx To serve WebP and AVIF images, there are two possibilities:
  1. use <picture> to let the browser pick the format it supports, or
  2. use content negotiation to let the server send the best-supported format.
I use the second approach. It relies on inspecting the Accept HTTP header in the request. For Chrome, it looks like this:
Accept: image/avif,image/webp,image/apng,image/*,*/*;q=0.8
I configure Nginx to serve the AVIF image, then the WebP image, and fall back to the original JPEG/PNG image depending on what the browser advertises:4
http {
  map $http_accept $webp_suffix {
    default        "";
    "~image/webp"  ".webp";
  }
  map $http_accept $avif_suffix {
    default        "";
    "~image/avif"  ".avif";
  }
}
server {
  # [...]
  location ~ ^/images/.*\.(png|jpe?g)$ {
    add_header Vary Accept;
    try_files $uri$avif_suffix$webp_suffix $uri$avif_suffix $uri$webp_suffix $uri =404;
  }
}
For example, let's suppose the browser requests /images/ont-box-orange@2x.jpg. If it supports WebP but not AVIF, $webp_suffix is set to .webp while $avif_suffix is set to the empty string. The server tries to serve the first existing file in this list:
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg
If the browser supports both AVIF and WebP, Nginx walks the following list:
  • /images/ont-box-orange@2x.jpg.webp.avif (it never exists)
  • /images/ont-box-orange@2x.jpg.avif
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg
Eugene Lazutkin explains in more detail how this works. I have only presented a variation of his setup supporting both WebP and AVIF.
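To check what the server actually returns, you can replay the browser's Accept header with curl; a quick sketch, assuming a hypothetical image URL and that the AVIF and WebP MIME types are declared in the Nginx configuration:

curl -sI \
     -H 'Accept: image/avif,image/webp,image/apng,image/*,*/*;q=0.8' \
     https://example.com/images/ont-box-orange@2x.jpg \
  | grep -iE '^(content-type|vary)'

If the .avif variant exists on disk, the response should advertise an AVIF content type along with "Vary: Accept".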

  1. VP8 is only used for lossy compression. Lossless compression uses an unrelated format.
  2. Firefox support was scheduled for Firefox 86, but because of the lack of proper color space support, it is still not enabled by default.
  3. Progressive decoding is not planned for WebP but could be implemented using low-quality thumbnail images for AVIF. See this issue for a discussion.
  4. The Vary header ensures an intermediary cache (a proxy or a CDN) checks the Accept header before using a cached response. Internet Explorer has trouble with this header and may not be able to cache the resource properly. There is a workaround, but Internet Explorer's market share is now so small that it is pointless to implement it.

7 June 2021

Mike Gabriel: UBports: Packaging of Lomiri Operating Environment for Debian (part 05)

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. Since 27th Feb 2020, Unity8 has now become Lomiri. Recent Uploads to Debian related to Lomiri (Feb - May 2021) Over the past 4 months I attended 14 of the weekly scheduled UBports development sync sessions and worked on the following bits and pieces regarding Lomiri in Debian: The largest amount of work (and time) went into getting lomiri-ui-toolkit ready for upload. That code component is an absolutely massive beast and deeply intertwined with Qt5 (and unit tests fail with every new warning a new Qt5.x introduces). This bit of work I couldn't do alone (see below in the "Credits" section). The next projects / packages ahead are some smaller packages (content-hub, gmenuharness, etc.) before we will finally come to lomiri (i.e. the main bit of the Lomiri Operating Environment) itself. Credits Many big thanks go to everyone on the UBports project, but especially to Ratchanan Srirattanamet who lived inside of lomiri-ui-toolkit for more than two weeks, it seemed. Also, thanks to Florian Leeber for being my point of contact for topics regarding my cooperation with the UBports Foundation. Packaging Status The current packaging status of Lomiri related packages in Debian can be viewed at:
https://qa.debian.org/developer.php?login=team%2Bubports%40tracker.debia... light+love
Mike Gabriel (aka sunweaver)

17 May 2021

Dominique Dumont: Important bug fix for OpenSsh cme config editor

The new release of Config::Model::OpenSsh fixes a bug that impacted experienced users: the order of Host or Match sections is now preserved when writing back the ~/.ssh/config file. Why does this matter? Well, the beginning of the ssh_config man page mentions that "For each parameter, the first obtained value will be used." and "Since the first obtained value for each parameter is used, more host-specific declarations should be given near the beginning of the file, and general defaults at the end." Looks like I missed these statements when I designed the model for OpenSsh configuration: the Host sections were written back in a neat, but wrong, alphabetical order. This does not matter except when there is an overlap between the specifications of the Host (or Match) sections, like in the example below:
Host foo.company.com
Port 22
Host *.company.com
Port 10022
With this example, an ssh connection to foo.company.com is done using port 22 and a connection to bar.company.com uses port 10022. If the Host sections are written back in reverse order:
Host *.company.com
Port 10022
Host foo.company.com
Port 22
Then ssh would be happy to use the first section matching foo.company.com, i.e. *.company.com, and would use the wrong port (10022). This is now fixed with Config::Model::OpenSsh 2.8.4.3, which is available on CPAN and in Debian/experimental. While I was at it, I've also updated the "Managing OpenSsh configuration with cme" wiki page. All the best.
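For readers who want to check their own file against the model, the cme commands documented on that wiki page should do the job; a minimal sketch, assuming Config::Model::OpenSsh and cme are installed:

cme check ssh   # validate ~/.ssh/config against the model
cme edit ssh    # review and edit it interactively

Run as a regular user, these operate on your own ~/.ssh/config rather than the system-wide configuration.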

16 May 2021

Vincent Fourmond: Tutorial: analyze redox inactivations/reactivations

Redox-dependent inactivations are actually rather common in the field of metalloenzymes, and electrochemistry can be an extremely powerful tool to study them, provided one can analyze the data quantitatively. The point of this post is to teach the reader how to do so using QSoas. For more general information about redox inactivations and how to study them using electrochemical techniques, the reader is invited to read the review del Barrio and Fourmond, ChemElectroChem 2019. This post is a tutorial to learn the analysis of data coming from the study of the redox-dependent substrate inhibition of periplasmic nitrate reductase NapAB, which has the advantage of being relatively simple. The whole process is discussed in Jacques et al, BBA, 2014. What you need to know in order to follow this tutorial is the following: You can download the data files from the GitHub repository. Before fitting the data to determine the values of the rate constants at the potentials of the experiment, we will first subtract the background current, assuming that the respective contributions of the faradaic and non-faradaic currents are additive. Start QSoas, go to the directory where you saved the files, and load both the data file and the blank file thus:
QSoas> cd
QSoas> load 27.oxw
QSoas> load 27-blanc.oxw
QSoas> S 1 0
(after the first command, you have to manually select the directory in which you downloaded the data files). The S 1 0 command just subtracts dataset 1 (the first loaded) from dataset 0 (the last loaded), see more there. "blanc" is French for "blank"... Then, we remove a bit of the beginning and the end of the data, corresponding to one half of the steps at \(E_0\), which we don't exploit much here (they are essentially only used to make sure that the irreversible loss is taken care of properly). This is done using strip-if:
QSoas> strip-if x<30 x>300
Then, we can fit ! The fit used is called fit-linear-kinetic-system, which is used to fit kinetic models with only linear reactions (like here) and steps which change the values of the rate constants but do not instantly change the concentrations. The specific command to fit the data is:
QSoas> fit-linear-kinetic-system /species=2 /steps=0,1,2,1,0
The /species=2 indicates that there are two species (A and I). The /steps=0,1,2,1,0 indicates that there are 5 steps, with three different conditions (0 to 2) in order 0,1,2,1,0. This fit needs a bit of setup before getting started. The species are numbered 1 and 2, and the conditions (potentials) are indicated by the #0, #1 and #2 suffixes. For the sake of simplicity, you can also simply load the starting-parameters.params parameters to have everything set up the correct way. Then, just hit "Fit", and enjoy this moment when QSoas works and you don't have to... The screen should now look like this:
Now, it's done! The fit is actually pretty good, and you can read the values of the inactivation and reactivation rate constants from the fit parameters. You can also train on the 21.oxw and 21-blanc.oxw files. Usually, re-loading the best fit parameters from other potentials as starting parameters works really well. Gathering the results of several fits into a real curve of rate constants as a function of potential is left as an exercise for the reader (or maybe a later post), although you may find this series of posts useful in this context!
About QSoas QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050-5052. The current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

1 April 2021

Russell Coker: Censoring Images

A client asked me to develop a system for censoring images from an automatic camera. The situation is that we have a camera taking regular photos from a fixed location which includes part of someone else's property. So my client made a JPEG with some black rectangles in the sections that need to be covered. The first thing I needed to do was convert the JPEG to a PNG with transparency for the sections that aren't to be covered. To convert it I loaded the JPEG in the GIMP and went to the Layer->Transparency->Add Alpha Channel menu to enable the Alpha channel. Then I selected the Bucket Fill tool and used Mode "Erase" and Fill by "Composite" and then clicked on the background (the part of the JPEG that was white) to make it transparent. Then I exported it to PNG. If anyone knows of an easy way to convert the file then please let me know. It would be nice if there was a command-line program I could run to convert a specified color (default white) to transparent. I say this because I can imagine my client going through a dozen iterations of an overlay file that doesn't quite fit. To censor the image I ran the composite command from ImageMagick. The command I used was "composite -gravity center overlay.png in.jpg out.jpg". If anyone knows a better way of doing this then please let me know. The platform I'm using is an ARM926EJ-S rev 5 (v5l) which takes 8 minutes of CPU time to convert a single JPEG at full DSLR resolution (4 megapixel). It also required enabling swap on a SD card to avoid running out of RAM and running "systemctl disable tmp.mount" to stop using tmpfs for /tmp as the system only has 256M of RAM.
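For what it's worth, ImageMagick itself can likely handle the white-to-transparent conversion from the command line; a minimal sketch, assuming the client's overlay is overlay.jpg and using -fuzz to absorb JPEG compression noise around pure white:

convert overlay.jpg -fuzz 10% -transparent white overlay.png
composite -gravity center overlay.png in.jpg out.jpg

The -transparent operator replaces the named colour (white here) with transparency, so a new overlay iteration from the client would only need this one extra command before compositing.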

23 March 2021

Gunnar Wolf: Regarding the Stallman comeback

Context:
  1. Richard Stallman is the founder of the Free Software movement, and committed his life to making what seemed like a ludicrous idea into a tangible reality. We owe him big time for that, and nothing somebody says or does will ever eclipse the fact.
  2. But Richard Stallman has a very toxic personality. There is a long, well-known published list of abuse cases; if you must read more into it, some regarding his views on sex, consent, gender, and some other issues are published as a part of the open letter I am about to reference. I have witnessed quite a few; I won't disclose here the details, as many other incidents are already known. And I don't mean by this sexual abuse, although that's the twig that eventually broke the camel's back, but ranging from general rudeness to absolute lack of consideration for people around him.
  3. In September 2019, Stallman was forced to resign, first from his position at MIT, then as the president of the FSF. The direct cause was a comment in which he defended Minsky (a personal friend of his, deceased three years prior) against accusations of sexual abuse.
  4. Last week, 18 months after he was driven out of the FSF, and at LibrePlanet (FSF's signature conference, usually held at MIT, this time naturally online only), Stallman announced his comeback to the Board of Directors of the FSF.
Many people (me included, naturally) in the Free Software world are very angry about this announcement. There is a call for signatures for a position statement presented by several free software leaders that has gathered, as I write this message, over 400 signatures. The Open Source Initiative has presented its institutional position statement. And I can only forecast this rejection will continue to grow. Free software was once the arena of young, raging alpha machos where a thick skin was an entry requirement. A good thing about growing up is that our community is now wiser, and although it still attracts younger people, there is a clear trend not to repeat our past ways. Free software has grown, and there is no place for a leader as disrespectful and hurtful as many of us have witnessed Stallman to be. Again, the free software movement and the world as a whole owe a great deal to Stallman. He changed history. I admire his work, his persistence and his stubbornness. But I won't have him represent me.

28 February 2021

Chris Lamb: Free software activities in February 2021

Here is my monthly update covering what I have been doing in the free software world during February 2021 (previous month):

Reproducible Builds The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during the compilation process by promising identical results are always generated from a given source, therefore allowing multiple third-parties to come to a consensus on whether a build was compromised. The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter. This month, I: I also made the following changes to diffoscope, including preparing and uploading versions 167 and 168 to Debian:

Debian Uploads I also sponsored an upload of adminer (4.7.9-1) for Alexandre Rossi. Debian LTS This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project. You can find out more about the project via the following video:

31 January 2021

Chris Lamb: Free software activities in January 2021

Here is my monthly update covering what I have been doing in the free software world during January 2021 (previous month):

Reproducible Builds One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. This month, I:
I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, releasing version 164, version 165 and version 166 as well as triaging and merging many contributions from others:

Debian Uploads Debian LTS This month I worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project. You can find out more about the project via the following video:

22 December 2020

Norbert Preining: Debian KDE Status for Bullseye

A long journey has come to a nice finish. 9 months ago I switched to KDE/Plasma, and started to package newer versions of it than available in Debian. Since then I have packaged every single version of Plasma, the KDE frameworks, and KDE Apps and made them available for Debian/unstable and Debian/testing via the OBS build server. Today, finally, I have uploaded Frameworks 5.77 and Plasma 5.20.4 to unstable, the end of a long story. Despite some initial disagreements with the Debian Qt/KDE Team, we found a modus vivendi, and since some months now I am a member of the team, working together with the rest to get an up-to-date KDE/Plasma system into Debian/bullseye. Thanks to everyone involved! The last weeks we have also worked on updating many of the KDE/Apps packages to the latest release 20.12.0, which means that as of now, Debian/unstable contains the most recent versions of KDE Frameworks, KDE Plasma, and of most KDE Apps. Thanks goes to all the team members, in particular to (in alphabetic order) Aurélien, Patrick, Pino, Sandro, and Scarlett for their work, and to all the testers and bug reporters. The current status is also more or less what we plan to get into Debian/Bullseye. An update to Frameworks 5.78 and Plasma 5.20.5 is still possible, but not decided by now. Concerning my OBS packages: they are mostly superseded by now, and all but the KDE Apps package can be removed from the apt sources. The only remaining archive of interest is
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2012/Debian_Unstable/ ./
and the same with Testing instead of Unstable for Debian/testing. For those adventurous, there is also the digikam-beta repository. That's it. Have a nice Christmas, if you celebrate it, and a good start into a hopefully better 2021! Enjoy.

5 December 2020

Andrew Cater: Part way through Debian CD release process - always fun to do - Debian 10.7 in process

As is normal for these days: I post a quick update on how we're doing. Working through the test suite: RattusRattus, Sledge, Isy, Schweer and I. We've been joined by somebody new - jlsantos - who has done his first test for us :) A couple of changes: there's now a different automatic partitioning layout. /boot has been resized to ~500MB - to allow for more than one kernel but also the other initramfs files. The filesystem size allocated for a swap partition is now 1GB by default. We're doing fairly well - one hardware failure on an old laptop, replaced by a newer model from Sledge's stock - everybody cheerful and all working well. Separately, there was a quiet release of Bullseye Alpha3 last night - that has had minimal testing - as ever, we're not near the release of Bullseye yet.

26 September 2020

Andrew Cater: Final post from media team for the day - most of the ordinary images and live images have been tested

Winding down slightly - we've worked our way through most of the images and testing. Schweer tested all of the Debian Edu/Skolelinux images, for which many thanks. Sledge, RattusRattus, Isy and I have been working pretty much solidly for 10 3/4 hours. There are still some images to build - mips, mipsel and s390x - but these are all images that we don't particularly have hardware to test on.
Another good and useful day - bits and pieces done throughout. NOTE: There appear to have been some security updates since the main release this morning so, as ever, it's worth updating machines on a regular basis. Waiting for the final images to finish building so that we can check the archive for completeness and then publish to the media mirrors. All the best until next time: thanks as ever to Sledge for his invaluable help. See you again in a couple of months in all likelihood. A much smaller release: some time in the next month we hope to be able to build and test an Alpha release for Bullseye. Bullseye is likely to be released somewhere round the middle of next year so we'll have additional Buster stable point releases in the meantime.

15 September 2020

Norbert Preining: GIMP washed out colors: Color to Alpha and layer recombination

Just to remind myself because I keep forgetting it again and again: if you get washed-out colors when doing Color to Alpha and then recombining layers in GIMP, that is due to the new default in GIMP 2.10 of combining layers in linear RGB.
This creates problems because Color to Alpha works in perceptual RGB, and the recombination in linear RGB creates washed-out colors. The solution is to right-click on the respective layer, select "Composite Space", and there select "RGB (perceptual)". Here is the bug report that has been open for 2 years. Hoping that next time I will remember it.

11 September 2020

Louis-Philippe Véronneau: Hire me!

I'm happy to announce I handed in my Master's Thesis last Monday. I'm not publishing the final copy just yet1, as it still needs to go through the approval committee. If everything goes well, I should have my Master of Economics diploma before Christmas! It sure hasn't been easy, and although I regret nothing, I'm also happy to be done with university. Looking for a job What an odd time to be looking for a job, right? Turns out for the first time in 12 years, I don't have an employer. It's oddly freeing, but also a little scary. I'm certainly not bitter about it though and it's nice to have some time on my hands to work on various projects and read things other than academic papers. Look out for my next blog posts on using the NeTV2 as an OSHW HDMI capture card, on hacking at security tokens and much more! I'm not looking for anything long term (I'm hoping to teach Economics again next Winter), but for the next few months, my calendar is wide open. For the last 6 years, I worked as a Linux system administrator, mostly using a LAMP stack in conjunction with Puppet, Shell and Python. Although I'm most comfortable with Puppet, I also have decent experience with Ansible, thanks to my work in the DebConf Videoteam. I'm not the most seasoned Debian Developer, but I have some experience packaging Python applications and libraries. Although I'm no expert at it, lately I've also been working on Clojure packages, as I'm trying to get Puppet 6 into Debian in time for the Bullseye freeze. At the rate it's going though, I doubt we're going to make it... If your company depends on Puppet and cares about having a version in Debian 11 that is maintained (Puppet 5 is EOL in November 2020), I'm your guy! Oh, and I guess I'm a soon-to-be Master of Economics specialising in Free and Open Source Software business models and incentives theory. Not sure I'll ever get paid putting that in application, but hey, who knows. If any of that resonates with you, contact me and let's have a chat! I promise I don't bite :)

  1. The title of the thesis is "What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model". Once the final copy is approved, I'll be sure to write a longer blog post about my findings here.

31 August 2020

Jacob Adams: Command Line 101

How to Work in a Text-Only Environment.

What is this thing? When you first open a command-line (note that I use the terms command-line and shell interchangeably here; they're basically the same, but command-line is the more general term, and shell is the name for the program that executes commands for you) you'll see something like this:
jaadams@bg7:/tmp/thisfolder$
This line is called a command prompt and it tells you three pieces of information:
  1. jaadams: The username of the user that is currently running this shell.
  2. bg7: The name of the computer that this shell is running on, important for when you start accessing shells on remote machines.
  3. /tmp/thisfolder: The folder or directory that your shell is currently running in. Like a file explorer (such as Windows Explorer or Mac's Finder), a shell always has a working directory, from which all relative paths (see sidenote below) are resolved.
When you first opened a shell, however, you might notice that it looks more like this:
jaadams@bg7:~$
This is a shorthand notation that the shell uses to make this output shorter when possible. ~ stands for your home directory, usually /home/<username>. Like C:\Users\<username>\ on Windows or /Users/<username> on Mac, this directory is where all your files should go by default. Thus a command prompt like this:
jaadams@bg7:~/Downloads$
actually tells you that you are currently in the /home/jaadams/Downloads directory.

Sidenote: The Unix Filesystem and Relative Paths Folders on Linux and other Unix-derived systems like MacOS are usually called directories. These directories are represented by paths, strings that indicate where the directory is on the filesystem. The one unusual part is the so-called "root directory". All files are stored in this folder or directories under it. Its path is just / and there are no directories above it. For example, the directory called home typically contains all user directories. This is stored in the root directory, and each user's specific data is stored in a directory named after that user under home. Thus, the home directory of the user jacob is typically /home/jacob, the directory jacob under the home directory stored in the root directory /. If you're interested in more details about what goes in what directory, man hier has the basics and the Filesystem Hierarchy Standard governs the layout of the filesystem on most Linux distributions. You don't always have to use the full path, however. If the path does not begin with a /, it is assumed that the path actually begins with the path of the current directory. So if you use a path like my/folders/here, and you're in the /home/jacob directory, the path will be treated like /home/jacob/my/folders/here. Each folder also contains the symbolic links .. and . . Symbolic links are a very powerful kind of file that is actually a reference to another file. .. always represents the parent directory of the current directory, so /home/jacob/.. links to /home/. . always links to the current directory, so /home/jacob/. links to /home/jacob.
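A short demonstration of relative paths and the . and .. links, using the same prompt notation as above:

jacob@lovelace/home/jacob$ cd Downloads      # relative path: /home/jacob/Downloads
jacob@lovelace/home/jacob/Downloads$ cd ..   # .. is the parent: back to /home/jacob
jacob@lovelace/home/jacob$ cd .              # . is the current directory: no change
jacob@lovelace/home/jacob$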

Running commands To run a command from the command prompt, you type its name and then usually some arguments to tell it what to do. For example, the echo command displays the text passed as arguments.
jacob@lovelace/home/jacob$ echo hello world
hello world
Arguments to commands are space-separated, so in the previous example hello is the first argument and world is the second. If you need an argument to contain spaces, you'll want to put quotes around it, echo "like so". Certain arguments are called "flags" or "options" (options if they take another argument, flags otherwise), usually prefixed with a hyphen, and they change the way a program operates. For example, the ls command outputs the contents of a directory passed as an argument, but if you add -l before the directory, it will give you more details on the files in that directory.
jacob@lovelace/tmp/test$ ls /tmp/test
1  2  3  4  5  6
jacob@lovelace/tmp/test$ ls -l /tmp/test
total 0
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 1
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 2
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 3
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 4
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 5
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 6
jacob@lovelace/tmp/test$
Most commands take different flags to change their behavior in various ways.

File Management
  • cd <path>: Change the current directory of the running shell to <path>.
  • ls <path>: Output the contents of <path>. If no path is passed, it prints the contents of the current directory.
  • touch <filename>: Create a new empty file called <filename>. Used on an existing file, it updates the file's last accessed and modified times. Most text editors can also create a new file for you, which is probably more useful.
  • mkdir <directory>: Create a new folder/directory at path <directory>.
  • mv <src> <dest>: Move a file or directory at path <src> to <dest>.
  • cp <src> <dest>: Copy a file or directory at path <src> to <dest>.
  • rm <file>: Remove a file at path <file>.
  • zip -r <zipfile> <contents...>: Create a zip file <zipfile> with contents <contents>. <contents> can be multiple arguments, and you'll usually want to use the -r argument when including directories in your zipfile, as otherwise only the directory will be included and not the files and directories within it.
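Taken together, these commands might be used like this (the file and directory names are just examples):

jacob@lovelace/tmp$ mkdir project                 # create a directory
jacob@lovelace/tmp$ touch project/notes.txt       # create an empty file in it
jacob@lovelace/tmp$ cp project/notes.txt project/notes-backup.txt
jacob@lovelace/tmp$ mv project/notes-backup.txt backup.txt
jacob@lovelace/tmp$ zip -r project.zip project    # archive the whole directory
jacob@lovelace/tmp$ rm backup.txt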

Searching
  • grep <thing> <file>: Look for the string <thing> in <file>. If no <file> is passed it searches standard input.
  • find <path> -name <name>: Find a file or directory called <name> somewhere under <path>. This command is actually very powerful, but also very complex. For example you can delete all files in a directory older than 30 days with:
    find -mtime +30 -exec rm {} \;
  • locate <name>: A much easier to use command to find a file with a given name, but it is not usually installed by default.
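For example, with a hypothetical notes.txt somewhere under /tmp:

jacob@lovelace/tmp$ grep hello notes.txt     # print the lines containing "hello"
hello world
jacob@lovelace/tmp$ find /tmp -name '*.txt'  # list every .txt file under /tmp
/tmp/notes.txt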

Outputting Files
  • cat <files...>: Output (concatenate) all the files passed as arguments.
  • head <file>: Output the beginning of <file>
  • tail <file>: Output the end of <file>

How to Find the Right Command All commands (at least on sane Linux distributions like Debian or Ubuntu) are documented with a manual page, in man section 1 (for more information on manual sections, run man intro). This can be accessed using man <command>. You can search for the right command using the -k flag, as in man -k <search>. You can also view manual pages in your browser, on sites like https://manpages.debian.org or https://linux.die.net/man. This is not always helpful, however, because some commands' descriptions are not particularly useful, and also there are a lot of manual pages, which can make searching for a specific one difficult. For example, finding the right command to search inside text files is quite difficult via man (it's grep). When you can't find what you need with man, I recommend falling back to searching the Internet. There are lots of bad Linux tutorials out there, but here are some reputable sources I recommend:
  • https://www.cyberciti.biz: nixCraft has excellent tutorials on all things Linux
  • Hosting providers like Digital Ocean or Linode: Good intro documentation, but can sometimes be outdated
  • https://tldp.org: The Linux Documentation project is great, but it can also be a little outdated sometimes.
  • https://stackoverflow.com: Oftentimes has great answers, but quality varies wildly since anyone can answer.
These are certainly not the only options but they re the sources I would recommend when available.

How to Read a Manual Page Manual pages consist of a series of sections, each with a specific purpose. Instead of attempting to write my own description here, I'm going to borrow the excellent one from The Linux Documentation Project:
The NAME section is the only required section. Man pages without a name section are as useful as refrigerators at the north pole. This section also has a standardized format consisting of a comma-separated list of program or function names, followed by a dash, followed by a short (usually one line) description of the functionality the program (or function, or file) is supposed to provide. By means of makewhatis(8), the name sections make it into the whatis database files. Makewhatis is the reason the name section must exist, and why it must adhere to the format I described. (Formatting explanation cut for brevity) The SYNOPSIS section is intended to give a short overview on available program options. For functions this section lists corresponding include files and the prototype so the programmer knows the type and number of arguments as well as the return type. The DESCRIPTION section eloquently explains why your sequence of 0s and 1s is worth anything at all. Here's where you write down all your knowledge. This is the Hall Of Fame. Win other programmers' and users' admiration by making this section the source of reliable and detailed information. Explain what the arguments are for, the file format, what algorithms do the dirty jobs. The OPTIONS section gives a description of how each option affects program behaviour. You knew that, didn't you? The FILES section lists files the program or function uses. For example, it lists configuration files, startup files, and files the program directly operates on. (Cut details about installing files) The ENVIRONMENT section lists all environment variables that affect your program or function and tells how, of course. Most commonly the variables will hold pathnames, filenames or default options. The DIAGNOSTICS section should give an overview of the most common error messages from your program and how to cope with them. There's no need to explain system error messages (from perror(3)) or fatal signals (from psignal(3)) as they can appear during execution of any program. The BUGS section should ideally be non-existent. If you're brave, you can describe here the limitations, known inconveniences and features that others may regard as misfeatures. If you're not so brave, rename it the TO DO section ;-) The AUTHOR section is nice to have in case there are gross errors in the documentation or program behaviour (Bzzt!) and you want to mail a bug report. The SEE ALSO section is a list of related man pages in alphabetical order. Conventionally, it is the last section.

Remote Access One of the more powerful uses of the shell is through ssh, the secure shell. This allows you to remotely connect to another computer and run a shell on that machine:
user@host:~$ ssh other@example.com
other@example:~$
The prompt changes to reflect the change in user and host, as you can see in the example above. This allows you to work in a shell on that machine as if it was right in front of you.

Moving Files Between Machines There are several ways you can move files between machines over ssh. The first and easiest is scp, which works much like the cp command except that paths can also take a user@host argument to move files across computers. For example, if you wanted to move a file test.txt to your home directory on another machine, the command would look like:
scp test.txt other@example.com:
(The home directory is the default path.) Otherwise you can move files by reversing the order of the arguments and putting a path after the colon to fetch files from another directory on the remote host. For example, if you wanted to fetch the file /etc/issue.net from example.com:
scp other@example.com:/etc/issue.net .
Another option is the sftp command, which gives you a very simple shell-like interface in which you can cd and ls, before either putting files onto the remote machine or getting files off of it. The final and most powerful option is rsync, which syncs the contents of one directory to another and doesn't copy files that haven't changed. It's powerful and complex, however, so I recommend reading the USAGE section of its man page. As a starting point, see the sketch below.
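A typical rsync invocation might look like this (hypothetical paths; -a preserves permissions and timestamps, -v lists what is transferred):

rsync -av myproject/ other@example.com:myproject/

Note the trailing slash on the source: with it, rsync copies the contents of myproject into the destination directory rather than the directory itself.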

Long-Running Commands The one problem with ssh is that it will stop any command running in your shell when you disconnect. If you want to leave something on and come back later then this can be a problem. This is where terminal multiplexers come in. tmux and screen both allow you to run a shell in a safe environment where it will continue even if you disconnect from it. You do this by running the command without any arguments, i.e. just tmux or just screen. In tmux you can disconnect from the current session by pressing Ctrl+b then d, and reattach with the tmux attach command. screen works similarly, but with Ctrl+a instead of b and screen -r to reattach.
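A minimal tmux session might look like this:

user@host:~$ tmux                  # start a new session and run your long job inside it
# press Ctrl+b then d to detach; the job keeps running
user@host:~$ tmux attach           # later, reattach to the running session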

Command Inputs and Outputs Arguments are not the only way to pass input to a command. They can also take input from what's called "standard input", which the shell usually connects to your keyboard. Output can go to two places, standard output and standard error, both of which are directed to the screen by default.

Redirecting I/O Note that I said above that standard input/output/error are only usually connected to the keyboard and the terminal? This is because you can redirect them to other places with the shell operators <, > and the very powerful |.

File redirects The operators < and > redirect the input and output of a command to a file. For example, if you wanted a file called list.txt that contained a list of all the files in a directory /this/one/here you could use:
ls /this/one/here > list.txt

Pipelines The pipe character, |, allows you to direct the output of one command into the input of another. This can be very powerful. For example, the following pipeline lists the contents of the current directory, searches for the string "test", then counts the number of results. (wc -l counts the number of lines in its input.)
ls | grep test | wc -l
For a better, but even more contrived example, say you have a file myfile, with a bunch of lines of potentially duplicated and unsorted data
test
test
1234
4567
1234
You can sort it and output only the unique lines with sort and uniq:
$ sort < myfile | uniq
1234
4567
test

Save Yourself Some Typing: Globs and Tab-Completion Sometimes you don't want to type out the whole filename when writing out a command. The shell can help you here by autocompleting when you press the tab key. If you have a whole bunch of files with the same suffix, you can refer to them when writing arguments as *.suffix. This also works with prefixes, prefix*, and in fact you can put a * anywhere, *middle*. The shell will expand that * into all the files in that directory that match your criteria (ending with a specific suffix, starting with a specific prefix, and so on) and pass each file as a separate argument to the command. For example, if I have a series of files called 1.txt, 2.txt, and so on up to 9, each containing just the number for which it's named, I could use cat to output all of them like so:
jacob@lovelace/tmp/numbers$ ls
1.txt  2.txt  3.txt  4.txt  5.txt  6.txt  7.txt  8.txt	9.txt
jacob@lovelace/tmp/numbers$ cat *.txt
1
2
3
4
5
6
7
8
9
Also the ~ shorthand mentioned above that refers to your home directory can be used when passing a path as an argument to a command.

Ifs and For loops The files in the above example were generated with the following shell commands:
for i in 1 2 3 4 5 6 7 8 9
do
echo $i > $i.txt
done
But I'll have to save variables, conditionals and loops for another day because this is already too long. Needless to say the shell is a full programming language, although a very ugly and dangerous one.

12 April 2020

Dirk Eddelbuettel: #24: Test, test, test, those R 4.0.0 binaries with Ubuntu and Rocker

Welcome to the 24th post in the relentlessly regular R ravings series, or R4 for short. R 4.0.0 will be released in less than two weeks, and testing is very important. I had uploaded two alpha release builds (at the end of March and a good week ago) as well as a first beta release yesterday, all to the Debian experimental distribution (as you can see here), tracking the release schedule set by Peter Dalgaard. Because R 4.0.0 will require reinstallation of all packages, it makes some sense to use a spare machine. Or a Docker container. So to support that latter mode, I have now complemented the binaries created from the r-base source package with all base and recommended packages, providing a starting point for actually running simple tests. Which is what we do in the video, using again the R on Ubuntu (18.04) Rocker container:

Slides from the video are at this link. This container based on 18.04 is described here on the Docker Hub; a new 20.04 container with the pre-release of the next Ubuntu LTS should be there shortly once it leaves the build queue. What we showed does of course also work on direct Ubuntu (or Debian, using those source repos) installations; the commands shown in the Rocker use case generally apply equally to a normal installation. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.
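To follow along, something like the following should work, assuming the rocker/r-ubuntu image for 18.04 shown in the video:

docker pull rocker/r-ubuntu:18.04
docker run --rm -ti rocker/r-ubuntu:18.04 R --version

Any packages you install inside the container are discarded when it exits (--rm), which makes it a safe sandbox for testing pre-release builds.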

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 March 2020

Axel Beckert: How do you type on a keyboard with only 46 or even 28 keys?

Some of you might have noticed that I'm into keyboards; since a few years ago, into mechanical keyboards to be precise. Preface It basically started with the Swiss Mechanical Keyboard Meetup (whose website I started later on), which was held in the hackerspace of the CCCZH. I mostly used TKL keyboards (i.e. keyboards with just the, for me useless, number block missing) and tried to get my hands on more keyboards with Trackpoints (but failed so far). At some point a year or two ago, I was looking into smaller keyboards for having a mechanical keyboard with me when travelling. I first bought a Vortex Core at Candykeys. The size was nice and especially having all layers labelled on the keys was helpful, but nevertheless I soon noticed that the smaller the keyboards get, the more important it is that they're properly programmable. The Vortex Core is programmable, but not the keys in the bottom right corner, which are exactly the keys I wanted to change to get a cursor block down there. (Later I found out that there are possibilities to get this done, either with an alternative firmware and a hack of it, or by desoldering all switches and mounting an alternative PCB called Atom47.) 40% Keyboards So at some point I ordered a MiniVan keyboard from The Van Keyboards (MiniVan keyboards will soon be available again at The Key Dot Company), here shown with GMK Paperwork (also bought from and designed by The Van Keyboards):
The MiniVan PCBs are fully programmable with the free and open source firmware QMK, and I started to use that more and more instead of bigger keyboards. Layers With the MiniVan I learned the concept of layers. Layers are similar to what many laptop keyboards do with the Fn key and to some extent also what the German standard layout does with the AltGr key: layers are basically alternative key maps you can switch with a special key (often called "Fn", "Fn1", "Fn2", etc., or, especially if there are two additional layers, "Raise" and "Lower"). There are several concepts for how these layers can be reached with these keys: My MiniVan Layout For the MiniVan, two additional layers suffice easily, but since I have a few characters on multiple layers and also have mouse control and media keys crammed in there, I have three additional layers on my MiniVan keyboards:

TRNS means transparent, i.e. use the settings from lower layers.
I also use a feature that allows me to bind different actions to a key depending on whether I just tap the key or hold it. Some also call this "tap dance". This is especially very popular on the usually rather huge spacebar. There, the term SpaceFn has been coined, probably after this discussion on Geekhack. I use this for all my layer switching keys: With this layout I can type English texts as fast as I can type them on a standard or TKL layout. German umlauts are a bit more difficult because they require 4 to 6 key presses per umlaut, as I use the Compose key functionality (mapped to the Menu key between the spacebars and the cursor block). So to type an Ä on my MiniVan, I have to:
  1. press and release Menu (i.e. Compose); then
  2. press and hold either Shift-Spacebar (i.e. Shift-Fn1) or Slash (i.e. Fn2), then
  3. press N for a double quote (i.e. Shift-Fn1-N or Fn2-N) and then release all keys, and finally
  4. press and release the base character for the umlaut, in this case Shift-A.
And now just use these concepts and reduce the amount of keys to 28: 30% and Sub-30% Keyboards In late 2019 I stumbled upon a nice little keyboard kit shop on Etsy called WorldspawnsKeebs, which I (and probably most other people in the mechanical keyboard scene) hadn't taken into account when looking for keyboards. They offer mostly kits for keyboards of 40% size and below, most of them rather simple and not expensive. For about 30 you get a complete sub-30% keyboard kit (without switches and keycaps though, but that's very common for keyboard kits as it leaves the choice of switches and key caps to you) named Alpha28, consisting of a minimal acrylic case and a PCB and electronics set. This Alpha28 keyboard is btw. fully open source, as the source code (i.e. design files) for the hardware is published under a free license (MIT license) on GitHub. And here's what my Alpha28 looks like with GMK Mitolet (part of the GMK Pulse group-buy) key caps:
So we only have character keys, Enter (labelled "Data" as there was no 1u Enter key with that row profile in that key cap set; I'll also call it "Data" for the rest of this posting) and a small spacebar, not even modifier keys. The Default Alpha28 Layout The original key layout by the developer of the Alpha28 used the spacebar as Shift on hold and as space if just tapped, and the "Data" key always switches to the next layer, i.e. it switches the layer permanently on tap and not just on hold. This way that key rotates through all layers. In all other layers, V switches back to the default layer. I assume that the modifiers on the second layer are also on tap and apply to the next normal key. This has the advantage that you don't have to bend your fingers for some key combos, but you have to remember on which layer you are at the moment. (IIRC QMK allows you to show that via LEDs or similar.) Kinda just like vi. My Alpha28 Layout But maybe because I'm more an Emacs person, I dislike remembering states myself and don't mind bending my fingers. So I decided to develop my own layout using tap-or-hold and only doing layer switches by holding down keys:

A triangle means that the settings from lower layers are used, N/A means the key does nothing.
It might not be very obvious, but on the default layer, all keys in the bottom row and most keys on the row ends have tap-or-hold configurations. Using the Alpha28 This layout works surprisingly well for me. Only for Minus, Equal, Single Quote and Semicolon I still often have to think or try whether they're on Layer 1 or 2, as on my 40%s (MiniVan, Zlant, etc.) I have them all on layer 1 (and in general one layer less overall). And for really seldom used keys like Insert, PrintScreen, ScrollLock or Pause, I might have to consult my own documentation. They're somewhere in the middle of the keyboard, either on layer 1, 2, or 3. ;-) And of course, typing umlauts takes even two keys more per umlaut than on the MiniVan, since on the one hand Menu is not on the default layer, and on the other hand, I don't have this nice shifted number row and actually have to also press Shift to get a double quote. So to type an Ä on my Alpha, I have to:
  1. press and release Space-F (i.e. Fn1-F) for Menu (i.e. Compose); then
  2. press and hold A-Spacebar-L (i.e. Shift-Fn1-L) for getting a double quote, then
  3. press and release the base character for the umlaut, in this case L-A for Shift-A (because we can't use A for Shift, as I can't hold a key and then press it again :-).
Conclusion If the characters on upper layers are not labelled like on the Vortex Core, i.e. especially on all self-made layouts, typing is a bit like playing that old children's game Memory: as soon as you remember (or your muscle memory knows) where some special characters are, typing gets faster. Otherwise, you start with trial and error or look at the documentation. Or give up. ;-) Nevertheless, typing on a sub-30% keyboard like the Alpha28 is much more difficult and slower than on a 40% keyboard like the MiniVan. So the Alpha28 very likely won't become my daily driver while the MiniVan de facto already is my daily driver. But I like these kinds of challenges as others like the game Memory. So I ordered three more 30% and sub-30% keyboard kits at WorldspawnsKeebs for soldering on the upcoming weekend during the COVID19 lockdown: And if I at some point want to try to type with even fewer keys, I'll try a Butterstick keyboard with just 20 keys. It's a chorded keyboard where you have to press multiple keys at the same time to get one character: so to get an A from the missing middle row, you have to press Q and Z simultaneously, to get Escape, press Q and W simultaneously, to get Control, press Q, W, Z and X simultaneously, etc. And if that's not even enough, I already bought a keyboard kit named Ginny (or Ginni, the developer can't seem to decide) with just 10 keys from an acquaintance. Couldn't resist when he offered his surplus kits. :-) It uses the ASETNIOP layout which was initially developed for on-screen keyboards on tablets.
